With increasing scale, large language models such as GPT-3 demonstrate both quantitative improvements and new qualitative capabilities, especially as zero-shot learners. However, these results rely heavily on delicate prompt design and heavy computation. In this work, we explore whether strong zero-shot ability can be achieved at a smaller model scale without any external supervised data. To achieve this goal, we revisit masked language modeling and present a geometry-guided self-supervised learning method (Go-tuning for short) that uses a small amount of task-aware self-supervised data to further update language models. Experiments show that Go-tuning enables T5-small (80M) to achieve zero-shot results competitive with much larger language models, such as T5-XL (3B). We also apply Go-tuning in a multi-task setting and develop a multi-task model, mgo-T5 (250M), which reaches the average performance of OPT (175B) on 9 datasets.
We introduce an explorative active learning (AL) algorithm based on Gaussian process regression with the marginalized graph kernel (GPR-MGK) to explore chemical space at minimal cost. Using high-throughput molecular dynamics simulation for data generation and graph neural networks (GNNs) for prediction, we construct an active learning molecular simulation framework for thermodynamic property prediction. In a targeted chemical space of 251,728 alkane molecules consisting of 4 to 19 carbon atoms and their liquid physical properties: density, heat capacity, and enthalpy of vaporization, we use the AL algorithm to select the most informative molecules to represent the chemical space. Validation on computational and experimental test sets shows that as few as 313 (0.124\% of the total) molecules are sufficient to train an accurate GNN model, achieving $\rm R^2 > 0.99$ on the computational test set and $\rm R^2 > 0.94$ on the experimental test set. We highlight two advantages of the proposed AL algorithm: compatibility with high-throughput data generation and reliable uncertainty quantification.
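A minimal sketch of the kind of uncertainty-driven selection loop described above, using a scikit-learn GaussianProcessRegressor as a stand-in for the GPR-MGK surrogate; the marginalized graph kernel, the MD-based data generation, and the molecule featurization are not reproduced here, and the selection criterion (largest posterior standard deviation) is an illustrative assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def active_learning_loop(X_pool, y_oracle, n_init=5, n_select=313):
    """Greedy uncertainty sampling: repeatedly label the molecule whose
    predicted property has the largest posterior standard deviation."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    gpr = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    while len(labeled) < n_select:
        gpr.fit(X_pool[labeled], y_oracle[labeled])     # refit the surrogate
        _, std = gpr.predict(X_pool, return_std=True)   # posterior uncertainty
        std[labeled] = -np.inf                          # never re-pick labeled points
        labeled.append(int(np.argmax(std)))             # most informative molecule
    return labeled                                      # indices to train the final model on
```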
Multimodal sentiment analysis and depression estimation are two important research topics that aim to predict human mental states using multimodal data. Previous studies have focused on developing effective fusion strategies to exchange and integrate mind-related information across different modalities. Recently, MLP-based techniques have achieved great success in various computer vision tasks. Inspired by this, we explore multimodal approaches from a feature-mixing perspective in this study. To this end, we introduce CubeMLP, a multimodal feature-processing framework based entirely on MLPs. CubeMLP consists of three independent MLP units, each with two affine transformations. CubeMLP accepts all relevant modality features as input and mixes them across three axes. After the features are processed by CubeMLP, the mixed multimodal features are flattened for task prediction. Our experiments are conducted on the sentiment analysis datasets CMU-MOSI and CMU-MOSEI, and the depression estimation dataset AVEC2019. The results show that CubeMLP achieves state-of-the-art performance at a much lower computational cost.
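A rough sketch of the axis-mixing idea, assuming the multimodal features are packed into a (batch, sequence, modality, channel) tensor; the hidden sizes, activation, and the absence of normalization/skip connections are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class AxisMLP(nn.Module):
    """Two affine transformations (with a nonlinearity) applied along one axis."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x, axis):
        x = x.transpose(axis, -1)   # move the target axis to the last position
        x = self.net(x)             # mix information along that axis
        return x.transpose(axis, -1)

class CubeMLPBlock(nn.Module):
    """Mix a (batch, sequence, modality, channel) tensor along its three non-batch axes."""
    def __init__(self, seq_len, n_modalities, channels, hidden=128):
        super().__init__()
        self.seq_mix = AxisMLP(seq_len, hidden)
        self.mod_mix = AxisMLP(n_modalities, hidden)
        self.chn_mix = AxisMLP(channels, hidden)

    def forward(self, x):           # x: (B, L, M, C)
        x = self.seq_mix(x, axis=1)
        x = self.mod_mix(x, axis=2)
        x = self.chn_mix(x, axis=3)
        return x.flatten(1)         # flattened mixed features for task prediction
```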
We introduce a power-of-two low-bit post-training quantization (PTQ) method for deep neural networks that satisfies hardware requirements and does not require lengthy retraining. Power-of-two quantization can convert the multiplications introduced by quantization and dequantization into bit shifts, which are adopted by many efficient accelerators. However, power-of-two scale factors have fewer candidate values, which leads to more rounding or clipping errors. We propose a novel power-of-two PTQ framework, dubbed RAPQ, which dynamically adjusts the power-of-two scales of the whole network instead of statically determining them layer by layer. It can theoretically trade off the rounding error and clipping error of the whole network. Meanwhile, the reconstruction method in RAPQ is based on the BN information of each unit. Extensive experiments on ImageNet demonstrate the excellent performance of our proposed method. Without bells and whistles, RAPQ reaches accuracies of 65% and 48% on ResNet-18 and MobileNetV2, respectively, with INT2 activations and INT4 weights. We are the first to propose the more constrained yet hardware-friendly power-of-two quantization scheme for low-bit PTQ and to prove that it can achieve nearly the same accuracy as SOTA PTQ methods. The code has been released.
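An illustrative sketch of why power-of-two scales are hardware-friendly: when the scale factor is $2^k$, dequantization reduces to shifting the exponent rather than a full multiplication. This only shows a basic symmetric quantize/dequantize step under assumed settings; RAPQ's network-wide scale optimization and BN-based reconstruction are not reproduced here.

```python
import numpy as np

def po2_quantize(x, n_bits=4):
    """Symmetric quantization with a power-of-two scale factor."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = max(np.max(np.abs(x)) / qmax, 1e-12)   # real-valued scale (guard against zeros)
    k = int(np.round(np.log2(scale)))              # snap to the nearest power of two
    q = np.clip(np.round(x / 2.0 ** k), -qmax - 1, qmax).astype(np.int32)
    return q, k

def po2_dequantize(q, k):
    """With a power-of-two scale, dequantization is q * 2**k, which an integer
    accelerator can realize as a bit shift instead of a multiplication."""
    return q.astype(np.float32) * 2.0 ** k
```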
Although existing face anti-spoofing (FAS) methods achieve high accuracy in intra-domain experiments, their effectiveness degrades severely in cross-domain scenarios due to poor generalizability. Recently, various techniques have been explored, such as domain generalization and representation disentanglement. However, the improvement is still limited by two issues: 1) It is hard to map all faces into a shared feature space. If faces from unknown domains are not mapped into known regions of the shared feature space, inaccurate predictions are obtained unexpectedly. 2) It is difficult to fully account for the various spoofing traces during disentanglement. In this paper, we propose a feature generation and hypothesis verification framework to alleviate these two issues. Most importantly, feature generation networks that generate hypotheses of real faces and known attacks are introduced into the FAS task for the first time. Subsequently, two hypothesis verification modules are applied to judge whether the input face comes from the real-face space and the real-face distribution, respectively. Furthermore, analyses of the relationship between our framework and Bayesian uncertainty estimation are given, providing theoretical support for reliable defense in unknown domains. Experimental results show that our framework achieves promising results and outperforms state-of-the-art methods on public datasets.
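The two-stage decision described above could be sketched roughly as follows. The feature generation networks are treated as black boxes that emit hypothesis features for real faces, and both the similarity-based space check and the classifier-based distribution check, along with their thresholds, are illustrative assumptions rather than the paper's actual modules.

```python
import torch
import torch.nn.functional as F

def verify(face_feat, real_hypotheses, live_classifier, tau_space=0.5, tau_dist=0.5):
    """Two hypothesis-verification checks on an input face feature:
    1) does it fall close to the generated real-face hypotheses (real-face space)?
    2) does a calibrated classifier place it inside the real-face distribution?"""
    # 1) space check: cosine similarity to the closest generated real-face hypothesis
    sims = F.cosine_similarity(face_feat.unsqueeze(0), real_hypotheses, dim=-1)
    in_space = sims.max().item() > tau_space
    # 2) distribution check: probability of "live" from a classifier head
    p_live = torch.sigmoid(live_classifier(face_feat)).item()
    in_dist = p_live > tau_dist
    return in_space and in_dist   # accept as a real face only if both checks pass
```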
Most existing policy learning solutions require the learning agent to receive high-quality supervision signals, such as rewards in reinforcement learning (RL) or high-quality expert demonstrations in behavioral cloning (BC). Such quality supervision is often infeasible or prohibitively expensive to obtain in practice. We aim for a unified framework that leverages the available cheap weak supervision to perform policy learning efficiently. To handle this problem, we treat the "weak supervision" as imperfect information coming from a peer agent, and evaluate the learning agent's policy based on a "correlated agreement" with the peer agent's policy (instead of simple agreement). Our approach explicitly penalizes over-fitting to the weak supervision. In addition to theoretical guarantees, extensive evaluations on tasks including RL with noisy rewards, BC with weak demonstrations, and standard policy co-training show that our method leads to substantial performance improvements, especially when the complexity or noise of the learning environment is high.
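A rough sketch of the correlated-agreement idea as described above: agreement with the peer on the same state is rewarded, while agreement measured on independently re-paired states is subtracted, so an agent that blindly copies the weak peer gains nothing. The batch construction and the 0/1 agreement score are illustrative assumptions.

```python
import numpy as np

def correlated_agreement_score(learner_actions, peer_actions, rng=None):
    """Evaluate a learner against weak peer supervision.
    learner_actions[i] and peer_actions[i] refer to the same state i."""
    if rng is None:
        rng = np.random.default_rng(0)
    # agreement on matched states
    matched = (learner_actions == peer_actions).astype(float).mean()
    # agreement on randomly re-paired (independent) states: the "blind agreement" baseline
    shuffled = rng.permutation(peer_actions)
    independent = (learner_actions == shuffled).astype(float).mean()
    return matched - independent   # positive only if the agreement is informative
```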
Diffusion models, a new generative modeling paradigm, have achieved great success in image, audio, and video generation. However, given the discrete categorical nature of text, it is not trivial to extend continuous diffusion models to natural language, and text diffusion models remain less studied. Sequence-to-sequence text generation is one of the essential natural language processing topics. In this work, we apply diffusion models to sequence-to-sequence text generation and explore whether the superior generation performance of diffusion models can transfer to the natural language domain. We propose SeqDiffuSeq, a text diffusion model for sequence-to-sequence generation. SeqDiffuSeq uses an encoder-decoder Transformer architecture to model the denoising function. To improve generation quality, SeqDiffuSeq combines the self-conditioning technique with a newly proposed adaptive noise schedule technique. The adaptive noise schedule balances the difficulty of denoising evenly across time steps and assigns exclusive noise schedules to tokens at different positions. Experimental results illustrate the good performance of SeqDiffuSeq on sequence-to-sequence generation in terms of both text quality and inference time.
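A condensed sketch of the self-conditioning trick during denoiser training, assuming a generic denoise_fn(x_t, t, source, x0_prev) whose last argument is the model's own previous estimate of the clean token embeddings; the actual SeqDiffuSeq encoder-decoder architecture and its adaptive, position-wise noise schedule are not reproduced here.

```python
import torch

def training_step(denoise_fn, x0, source, alpha_bars, p_self_cond=0.5):
    """One diffusion training step with self-conditioning on latent token embeddings.
    x0: clean target embeddings (B, L, D); alpha_bars: cumulative signal levels per step."""
    t = torch.randint(0, len(alpha_bars), (x0.size(0),), device=x0.device)
    alpha_bar = alpha_bars[t].view(-1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise   # forward corruption

    x0_prev = torch.zeros_like(x0)              # default: no self-conditioning input
    if torch.rand(()) < p_self_cond:
        with torch.no_grad():                   # first pass gives an estimate of x0
            x0_prev = denoise_fn(x_t, t, source, torch.zeros_like(x0))
    x0_pred = denoise_fn(x_t, t, source, x0_prev)   # second pass conditions on it
    return torch.mean((x0_pred - x0) ** 2)          # predict-x0 regression loss
```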
In this paper, we introduce a novel optimization algorithm for machine learning model training called Normalized Stochastic Gradient Descent (NSGD), inspired by Normalized Least Mean Squares (NLMS) from adaptive filtering. When training a high-complexity model on a large dataset, the learning rate is critically important, as a poor choice of optimizer parameters can lead to divergence. The algorithm updates the network weights using the stochastic gradient, but with $\ell_1$- and $\ell_2$-based normalizations of the learning rate parameter, similar to the NLMS algorithm. Our main difference from existing normalization methods is that we do not include the error term in the normalization process; instead, we normalize the update term using the input vector to the neuron. Our experiments show that, with our optimization algorithm, the model can be trained to a better accuracy level across different initial settings. We demonstrate the efficiency of our training algorithm using ResNet-20 and a toy neural network on different benchmark datasets with different initializations. NSGD improves the accuracy of ResNet-20 from 91.96\% to 92.20\% on the CIFAR-10 dataset.
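A minimal sketch of the normalization described above for a single linear layer: the raw stochastic-gradient update for a neuron's weights is scaled by a learning rate divided by the norm of that neuron's input, as in NLMS but without the error term. The epsilon constant and the single-layer setting are assumptions for illustration.

```python
import numpy as np

def nsgd_update(W, x, grad_out, lr=0.1, eps=1e-8, norm="l2"):
    """One NSGD-style update for a linear layer y = W @ x.
    The SGD update outer(dL/dy, x) is applied with a learning rate normalized
    by the input vector's norm (no error term in the normalization)."""
    denom = np.dot(x, x) if norm == "l2" else np.sum(np.abs(x))   # ||x||_2^2 or ||x||_1
    effective_lr = lr / (eps + denom)
    return W - effective_lr * np.outer(grad_out, x)
```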
Language models with the Transformer structure have shown great performance in natural language processing. However, problems such as over-fitting or representation collapse still arise when fine-tuning pre-trained language models (PLMs) on downstream tasks. In this work, we propose HyPe, a simple yet effective fine-tuning technique that alleviates such problems by perturbing the hidden representations of Transformer layers. Unlike previous works that only add noise to inputs or parameters, we argue that the hidden representations of Transformer layers convey more diverse and meaningful language information; therefore, making the Transformer layers more robust to hidden-representation perturbations can further benefit the fine-tuning of PLMs en bloc. We conduct extensive experiments and analyses on GLUE and other natural language inference datasets. Results demonstrate that HyPe outperforms vanilla fine-tuning and enhances the generalization of hidden representations from different layers. In addition, HyPe incurs negligible computational overhead and is better than, and compatible with, previous state-of-the-art fine-tuning techniques.
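A small sketch of the hidden-representation perturbation idea, here injected via forward hooks that add Gaussian noise to each Transformer layer's output during training; the noise scale, the use of Gaussian (rather than uniform) noise, and the hook-based implementation are assumptions, not HyPe's exact recipe.

```python
import torch

def add_hidden_noise_hooks(transformer_layers, sigma=1e-5):
    """Register hooks that perturb each layer's hidden states during training only."""
    handles = []
    for layer in transformer_layers:
        def hook(module, inputs, output):
            if not module.training:
                return output
            hidden = output[0] if isinstance(output, tuple) else output
            noisy = hidden + sigma * torch.randn_like(hidden)   # perturb hidden representations
            return (noisy,) + output[1:] if isinstance(output, tuple) else noisy
        handles.append(layer.register_forward_hook(hook))
    return handles   # call h.remove() on each handle before evaluation
```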
Detecting sarcasm and verbal irony in people's subjective statements is crucial to understanding their intended meanings and their real sentiments and positions in social scenarios. This paper describes the X-PuDu system that participated in SemEval-2022 Task 6, iSarcasmEval - Intended Sarcasm Detection in English and Arabic, which aims at detecting intended sarcasm in various natural language understanding settings. Our solution fine-tunes pre-trained language models, such as ERNIE-M and DeBERTa, in a multilingual setting to recognize irony in Arabic and English texts. Our system ranked second out of 43 and ninth out of 32 in Task A: one-sentence detection in English and Arabic; fifth out of 22 in Task B: binary multi-label classification in English; and first out of 16 and fifth out of 13 in Task C: sentence-pair detection in English and Arabic.